Artificial Intelligence and civil liability: Europe at a regulatory crossroads

The European Union stands at a regulatory crossroads. After adopting the AI Act, the world's first comprehensive regulatory framework for Artificial Intelligence, the Commission is now considering withdrawing its proposal for a directive on civil liability and AI (AILD), fearing that excessive regulation will drive businesses away.
Andrea Bertolini, associate professor at the Istituto Dirpolis of the Scuola Superiore Sant'Anna in Pisa, addresses the issue in “Artificial intelligence and civil liability – A European perspective”, a study requested by the European Parliament.
The study is a key contribution to the debate on the legal future of artificial intelligence in Europe. It shows how the absence of a uniform European regulatory framework necessarily leads to fragmentation and a proliferation of potentially divergent national solutions. Paradoxically, the absence of common rules thus produces a de facto overregulation that is highly damaging to businesses.
The study also identifies critical issues in the proposed directive on civil liability and AI (AILD) currently under consideration, which risks being largely inadequate: by offering only procedural solutions rather than establishing a clear, well-defined rule of liability, it could not prevent fragmentation and the emergence of divergent solutions across the 27 Member States.
The study argues instead that regulatory intervention should focus solely on high-risk AI systems, avoiding excessive regulation of low-impact applications, and should introduce alternative, more effective liability models, such as strict liability or presumed-fault rules, to provide greater legal certainty, stronger protection for users and sustainability for businesses.